Do Bayesians Learn Their Way Out of Ambiguity?

Author

  • Alexander Zimper
Abstract

In standard models of Bayesian learning, agents reduce their uncertainty about an event's true probability because their consistent estimator concentrates almost surely around this probability's true value as the number of observations becomes large. This paper takes the empirically observed violations of Savage's (1954) sure thing principle seriously and asks whether Bayesian learners with ambiguity attitudes will reduce their ambiguity when sample information becomes large. To address this question, I develop closed-form models of Bayesian learning in which beliefs are described as Choquet estimators with respect to neo-additive capacities (Chateauneuf, Eichberger, and Grant 2007). Under the optimistic, the pessimistic, and the full Bayesian update rule, a Bayesian learner's ambiguity will increase rather than decrease, to the effect that these agents will express ambiguity attitudes regardless of whether they have access to large sample information or not. While consistent Bayesian learning occurs under the Sarin-Wakker update rule, this result comes with the descriptive drawback that it does not apply to agents who still express ambiguity attitudes after one round of updating.

Keywords: Non-additive Probability Measures, Bayesian Learning, Choquet Expected Utility Theory

JEL Classification Numbers: C11, D81, D83

I thank Alex Ludwig for helpful comments and suggestions. Financial support from Economic Research Southern Africa (ERSA) is gratefully acknowledged. Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa. E-mail: [email protected]
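
For readers unfamiliar with the belief representation the abstract refers to, the following is a minimal sketch of a neo-additive capacity and its Choquet expectation in the spirit of Chateauneuf, Eichberger, and Grant (2007); the symbols $\delta$ (degree of ambiguity), $\lambda$ (degree of optimism), and $\pi$ (additive prior) are illustrative notation and are not taken from the paper itself.

% Neo-additive capacity \nu on a state space \Omega (illustrative sketch):
\[
\nu(A) =
\begin{cases}
0, & A = \emptyset,\\
\delta\lambda + (1-\delta)\,\pi(A), & \emptyset \subsetneq A \subsetneq \Omega,\\
1, & A = \Omega,
\end{cases}
\qquad \delta, \lambda \in [0,1].
\]
% The corresponding Choquet expectation of a utility act u mixes best-case,
% worst-case, and standard expected utility:
\[
\int u \, \mathrm{d}\nu =
\delta \Bigl( \lambda \max_{\omega} u(\omega) + (1-\lambda) \min_{\omega} u(\omega) \Bigr)
+ (1-\delta)\, \mathbb{E}_{\pi}[u].
\]

Setting $\delta = 0$ recovers an additive belief $\pi$ and standard expected utility; this is the benchmark of consistent Bayesian learning against which the paper asks whether ambiguity ($\delta > 0$) is reduced as sample information grows.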


Similar Articles

From the Editors - Probability Scoring Rules, Ambiguity, Multiattribute Terrorist Utility, and Sensitivity Analysis

This issue’s “From the Editors” column is coauthored with all the associate editors, to emphasize their major role in the leadership of the journal. We first review this year’s operations and thank our editorial board and referees. Our first article, by David J. Johnstone, Victor Richmond R. Jose, and Robert L. Winkler, presents “Tailored Scoring Rules for Probabilities,” which take into account t...


Partisan Bias and the Bayesian Ideal in the Study of Public Opinion

Bayes’ Theorem is increasingly used as a benchmark against which to judge the quality of citizens’ thinking, but some of its implications are not well understood. A common claim is that Bayesians must agree more as they learn and that the failure of partisans to do the same is evidence of bias in their responses to new information. Formal inspection of Bayesian learning models shows that this i...


Dynamic Semi-Consistency Very Preliminary and Incomplete

The behavior of dynamically consistent agents who follow through with any ex ante optimal plan cannot be distinguished from the behavior of Bayesians. The notion of dynamic semi-consistency allows for models in which different ambiguity attitudes generate different predictions while at the same time keeping the normatively appealing tenet that agents follow through with ex ante optimal pure str...


Bayesians Can Learn From Old Data

In a widely-cited paper, Glymour [1] claims to show that Bayesians cannot learn from old data. His argument contains an elementary error. I explain exactly where Glymour went wrong, and how the problem should be handled correctly. When the problem is fixed, it is seen that Bayesians, just like logicians, can indeed learn from old data. Outline of the Paper I first review some aspects of standar...


On Bayesian problem-solving: helping Bayesians solve simple Bayesian word problems

Resolving the “Bayesian Paradox”: Bayesians who failed to solve Bayesian problems. A well-supported conclusion a reader would draw from the vast amount of research on Bayesian inference could be distilled into one sentence: “People are profoundly Bayesians, but they fail to solve Bayesian word problems....



Journal:
  • Decision Analysis

Volume 8, Issue 

Pages -

Publication year: 2011